    On-line hydraulic state prediction for water distribution systems

    World Environmental and Water Resources Congress 2009: Great Rivers. Proceedings of the World Environmental and Water Resources Congress 2009, May 17–21, 2009, Kansas City, Missouri.

    This paper describes and demonstrates a method for on-line hydraulic state prediction in urban water networks. The proposed method uses a Predictor-Corrector (PC) approach in which a statistical data-driven algorithm is applied to estimate future water demands, while near real-time field measurements are used to correct (i.e., calibrate) these predicted values on-line. The calibration problem is solved using a modified Least Squares (LS) fit method. The objective function is the minimization of the least squares of the differences between predicted and measured hydraulic parameters (i.e., pressures and flow rates at several system locations), with the decision variables being the consumers' water demands. The a priori estimation (i.e., prediction) of the values of the decision variables, which improves through experience, facilitates better convergence of the calibration model and provides adequate information on the system's hydraulic state for real-time optimization. The proposed methodology is demonstrated on a prototypical municipal water distribution system.
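    The predictor-corrector loop described in the abstract can be illustrated with a minimal sketch. The one-line linear pressure model, the gradient-descent corrector, and all names and values below are invented simplifications for illustration, not the paper's actual hydraulic model or solver:

```python
def predict_demands(history):
    """Data-driven predictor: here simply the mean of recent demands per node."""
    return [sum(node) / len(node) for node in history]

def modelled_pressure(demands, coeffs):
    """Toy linear hydraulic model: pressure = static head - loss coeff * demand."""
    return [p0 - c * d for (p0, c), d in zip(coeffs, demands)]

def correct(demands, measured, coeffs, lr=0.01, iters=500):
    """Corrector: least-squares fit of demands to pressure measurements."""
    d = list(demands)
    for _ in range(iters):
        model = modelled_pressure(d, coeffs)
        # gradient of sum((model - measured)^2) with respect to each demand
        for i, (p0, c) in enumerate(coeffs):
            grad = 2.0 * (model[i] - measured[i]) * (-c)
            d[i] -= lr * grad
    return d

history = [[10.0, 11.0, 12.0], [5.0, 5.5, 6.0]]     # past demands per node
coeffs = [(50.0, 1.0), (40.0, 2.0)]                 # (static head, loss coeff)
true_demands = [12.5, 6.2]
measured = modelled_pressure(true_demands, coeffs)  # synthetic measurements

prior = predict_demands(history)        # a priori estimate (prediction step)
calibrated = correct(prior, measured, coeffs)       # correction step
```

    The a priori estimate from the predictor starts the corrector close to the solution, which is the convergence benefit the abstract refers to.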

    Alternative configurations of quantile regression for estimating predictive uncertainty in water forecasts for the upper Severn River: a comparison

    The present study comprises an intercomparison of different configurations of a statistical post-processor that is used to estimate predictive hydrological uncertainty. It builds on earlier work by Weerts, Winsemius and Verkade (2011; hereafter referred to as WWV2011), who used the quantile regression technique to estimate predictive hydrological uncertainty using a deterministic water level forecast as a predictor. The various configurations are designed to address two issues with the WWV2011 implementation: (i) quantile crossing, which causes cumulative predictive distributions that are not strictly rising, and (ii) the use of linear quantile models to describe joint distributions that may not be strictly linear. Thus, four configurations were built: (i) a "classical" quantile regression, (ii) a configuration that implements a non-crossing quantile technique, (iii) a configuration where quantile models are built in normal space after application of the normal quantile transformation (NQT) (similar to the implementation used by WWV2011), and (iv) a configuration that builds quantile models separately on separate domains of the predictor. Using each configuration, four reforecasting series of water levels at 14 stations in the upper Severn River were established. The quality of these four series was intercompared using a set of graphical and numerical verification metrics. The intercomparison showed that reliability and sharpness vary across configurations, but in none of the configurations do these two forecast quality aspects improve simultaneously. Further analysis shows that skill in terms of the Brier skill score, mean continuous ranked probability skill score and relative operating characteristic score is very similar across the four configurations.
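    At the core of every configuration above is the pinball (quantile) loss that quantile regression minimises. A minimal sketch, assuming the simplest possible case of a constant quantile model fitted by subgradient descent (the data, step size, and function names are hypothetical, and a real post-processor would regress the quantile on a forecast predictor):

```python
def pinball_loss(q, y, tau):
    """Average pinball (quantile) loss of estimate q for sample y at level tau."""
    return sum(tau * (v - q) if v >= q else (tau - 1.0) * (v - q) for v in y) / len(y)

def fit_quantile(y, tau, lr=0.1, iters=20000):
    """Estimate the tau-quantile of y by subgradient descent on the pinball loss."""
    q = sum(y) / len(y)  # start from the sample mean
    for _ in range(iters):
        # subgradient of the average pinball loss with respect to q
        g = sum(-tau if v >= q else (1.0 - tau) for v in y) / len(y)
        q -= lr * g
    return q

y = [float(v) for v in range(1, 101)]   # synthetic water levels
q50 = fit_quantile(y, 0.5)              # median, near 50.5
q90 = fit_quantile(y, 0.9)              # 90% quantile, near 90
```

    Fitting each quantile level independently, as here, is exactly what allows the quantile-crossing problem (q50 exceeding q90 for some predictor values) that configuration (ii) is designed to prevent.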

    Oracle-based optimization applied to climate model calibration

    In this paper, we show how oracle-based optimization can be effectively used for the calibration of an intermediate complexity climate model. In a fully developed example, we estimate the 12 principal parameters of the C-GOLDSTEIN climate model by using an oracle-based optimization tool, Proximal-ACCPM. The oracle is a procedure that finds, for each query point, a value for the goodness-of-fit function and an evaluation of its gradient. The difficulty in the model calibration problem stems from the need to undertake costly calculations for each simulation and also from the fact that the error function used to assess the goodness-of-fit is not convex. The method converges to a "best fit" estimate over 10 times faster than a comparable test using the ensemble Kalman filter. The approach is simple to implement and potentially useful in calibrating computationally demanding models based on temporal integration (simulation), for which functional derivative information is not readily available.
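    The oracle interface (value plus gradient at each query point) can be sketched with a basic Kelley cutting-plane loop; Proximal-ACCPM itself queries analytic centers of a localization set and handles many dimensions, so the 1-D toy problem, grid search, and names below are illustrative assumptions only:

```python
def oracle(x):
    """First-order oracle for a toy goodness-of-fit function (hypothetical)."""
    f = (x - 3.0) ** 2 + 1.0   # function value at the query point
    g = 2.0 * (x - 3.0)        # gradient at the query point
    return f, g

def cutting_plane(oracle, lo, hi, iters=30):
    """Kelley's cutting-plane method driven only by oracle calls."""
    cuts = []                       # each cut: y -> f(xq) + g(xq) * (y - xq)
    best_x, best_f = None, float("inf")
    x = (lo + hi) / 2.0
    for _ in range(iters):
        f, g = oracle(x)
        if f < best_f:
            best_x, best_f = x, f
        cuts.append((f, g, x))
        lower = lambda y: max(fc + gc * (y - xc) for fc, gc, xc in cuts)
        grid = [lo + (hi - lo) * i / 200.0 for i in range(201)]
        x = min(grid, key=lower)    # next query: minimiser of the lower model
    return best_x, best_f

best_x, best_f = cutting_plane(oracle, 0.0, 10.0)
```

    Each oracle call adds one supporting plane to a piecewise-linear lower model of the objective, so progress is made with few expensive simulations, which is the point when each function evaluation is a full model run.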

    Evolutionary algorithms and other metaheuristics in water resources: Current status, research challenges and future directions

    Abstract not available.

    H.R. Maier, Z. Kapelan, Kasprzyk, J. Kollat, L.S. Matott, M.C. Cunha, G.C. Dandy, M.S. Gibbs, E. Keedwell, A. Marchi, A. Ostfeld, D. Savic, D.P. Solomatine, J.A. Vrugt, A.C. Zecchin, B.S. Minsker, E.J. Barbour, G. Kuczera, F. Pasha, A. Castelletti, M. Giuliani, P.M. Ree

    Practical experience of sensitivity analysis: Comparing six methods, on three hydrological models, with three performance criteria

    Currently, practically no modeling study is expected to be carried out without some form of Sensitivity Analysis (SA). At the same time, there is a large number of methods and it is not always easy for practitioners to choose one. The aim of this paper is to briefly review the main classes of SA methods and to present the results of a practical comparative analysis of applying them. Six different global SA methods: Sobol, eFAST (extended Fourier Amplitude Sensitivity Test), Morris, LH-OAT, RSA (Regionalized Sensitivity Analysis), and PAWN are tested on three conceptual rainfall-runoff models of varying complexity (GR4J, Hymod, and HBV) applied to the case study of the Bagmati basin (Nepal). The methods are compared with respect to effectiveness, efficiency, and convergence, and a practical framework for selecting and using SA methods is presented. The results show that, first of all, all six SA methods are effective. The Morris and LH-OAT methods are the most efficient in computing sensitivity indices (SI) and rankings. eFAST performs better than Sobol, and can thus be seen as a viable alternative to it. The PAWN and RSA methods have issues of instability, which we attribute to the ways Cumulative Distribution Functions (CDFs) are built, and to the use of Kolmogorov-Smirnov statistics to compute the sensitivity indices. All the methods require a sufficient number of runs to reach convergence. Differences in efficiency between methods are an inevitable consequence of the differences in their underlying principles. For SA of hydrological models, it is recommended to apply the presented practical framework using several methods, and to explicitly take into account the constraints of effectiveness, efficiency (including convergence), ease of use, and availability of software.
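    The screening idea behind the Morris and LH-OAT methods, one-at-a-time perturbations summarised as mean absolute elementary effects (mu*), can be sketched as follows. The toy model, parameter count, step size, and seed are all hypothetical; real implementations use structured trajectories rather than this simplified radial design:

```python
import random

def model(x):
    """Toy model: x[0] strongly influential, x[1] weakly, x[2] inert."""
    return 5.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def morris_mu_star(model, k=3, r=50, delta=0.25, seed=1):
    """Mean absolute elementary effect (mu*) per parameter."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random base point, shrunk so that x_i + delta stays in [0, 1]
        x = [rng.random() * (1.0 - delta) for _ in range(k)]
        y0 = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta               # one-at-a-time perturbation
            effects[i].append(abs(model(xp) - y0) / delta)
    return [sum(e) / len(e) for e in effects]

mu = morris_mu_star(model)   # ranks parameters by influence
```

    For the linear toy model the elementary effects recover the coefficients exactly; the method's efficiency comes from needing only r * (k + 1) model runs to rank k parameters.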

    Committees Of Specialized Conceptual Hydrological Models: Comparative Study

    A single hydrological model, or a model calibrated on a single objective function, often cannot capture all components of a water motion process. One possibility is to build several specialized models, each responsible for a particular sub-process (e.g., high flows or low flows), and to combine them using dynamic weights, thus forming a committee model. In this study, we test two different committee models: one uses fuzzy membership functions, and the other uses weights calculated from hydrological states. The specialized models are calibrated using the Adaptive Cluster Covering Algorithm with different objective functions. The performances of the two committee models are illustrated and compared.
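    The fuzzy-membership committee can be sketched in a few lines; the two linear specialist models, the membership breakpoints, and all names are invented stand-ins for the calibrated conceptual models in the study:

```python
def low_flow_model(q):
    """Hypothetical specialist calibrated for low flows."""
    return 0.5 * q + 1.0

def high_flow_model(q):
    """Hypothetical specialist calibrated for high flows."""
    return 0.9 * q

def membership_high(q, lo=10.0, hi=30.0):
    """Fuzzy membership of the high-flow regime: 0 below lo, 1 above hi."""
    if q <= lo:
        return 0.0
    if q >= hi:
        return 1.0
    return (q - lo) / (hi - lo)   # linear transition between regimes

def committee(q):
    """Dynamically weighted combination of the two specialists."""
    w = membership_high(q)
    return w * high_flow_model(q) + (1.0 - w) * low_flow_model(q)
```

    Because the weight varies smoothly with the flow state, the committee output transitions between specialists instead of switching abruptly at a threshold.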

    Approach to robust multi-objective optimization and probabilistic analysis: The ROPAR algorithm

    This paper considers the problem of robust optimization and presents a technique called Robust Optimization and Probabilistic Analysis of Robustness (ROPAR). It has been developed for finding robust optimum solutions for a particular class of model-based multi-objective optimization (MOO) problems (i.e., where the objective function is not known analytically), in which some of the parameters or inputs to the model are assumed to be uncertain. A Monte Carlo simulation framework is used; it can be straightforwardly implemented in a distributed computing environment, which allows the results to be obtained relatively fast. The technique is exemplified in two case studies: (a) a benchmark problem commonly used to test MOO algorithms (a version of the ZDT1 function); and (b) a design problem for a simple storm drainage system, where the uncertainty is associated with design rainfall events. It is shown that the design found by ROPAR can adequately cope with these uncertainties. The approach can be useful for assisting in a wide range of risk-based decisions.
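    The core ROPAR loop (sample the uncertain input, compute a Pareto front per sample, then summarise how often each candidate stays optimal) can be sketched on a drainage-flavoured toy problem. The cost and overflow formulas, the rainfall distribution, and all names below are invented for illustration and are not the paper's actual case study:

```python
import random

def objectives(diameter, rain):
    """Two minimisation objectives for a pipe design (hypothetical toy model)."""
    cost = diameter ** 2                         # construction cost
    overflow = max(0.0, rain - 5.0 * diameter)   # flood volume when capacity exceeded
    return cost, overflow

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (both minimised)."""
    return fa[0] <= fb[0] and fa[1] <= fb[1] and (fa[0] < fb[0] or fa[1] < fb[1])

def pareto_front(points):
    """Keep the non-dominated (design, objectives) pairs."""
    return [(d, f) for d, f in points
            if not any(dominates(g, f) for _, g in points if g != f)]

def ropar(designs, n=500, seed=2):
    """Fraction of Monte Carlo samples in which each design is Pareto-optimal."""
    rng = random.Random(seed)
    counts = {d: 0 for d in designs}
    for _ in range(n):
        rain = rng.gauss(20.0, 4.0)              # uncertain design rainfall
        front = pareto_front([(d, objectives(d, rain)) for d in designs])
        for d, _ in front:
            counts[d] += 1
    return {d: counts[d] / n for d in designs}

freq = ropar([1, 2, 3, 4, 5, 6])
```

    The resulting frequencies are one simple robustness summary: designs that remain on the Pareto front across most rainfall realisations are robust choices, and because each Monte Carlo sample is independent the loop parallelises trivially.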